68 research outputs found

    Long-Term Outcomes After Transcatheter Aortic Valve Implantation in High-Risk Patients With Severe Aortic Stenosis: The U.K. TAVI (United Kingdom Transcatheter Aortic Valve Implantation) Registry

    Objectives: The objective was to define the characteristics of a real-world patient population treated with transcatheter aortic valve implantation (TAVI), regardless of technology or access route, and to evaluate their clinical outcome over the mid to long term.
    Background: Although a substantial body of data exists in relation to early clinical outcomes after TAVI, there are few data on outcomes beyond 1 year in any notable number of patients.
    Methods: The U.K. TAVI (United Kingdom Transcatheter Aortic Valve Implantation) Registry was established to report outcomes of all TAVI procedures performed within the United Kingdom. Data were collected prospectively on 870 patients undergoing 877 TAVI procedures up until December 31, 2009. Mortality tracking was achieved in 100% of patients, with mortality status reported as of December 2010.
    Results: Survival was 92.9% at 30 days, 78.6% at 1 year, and 73.7% at 2 years. There was a marked attrition in survival between 30 days and 1 year. In a univariate model, survival was significantly adversely affected by renal dysfunction, the presence of coronary artery disease, and a nontransfemoral approach, whereas left ventricular function (ejection fraction <30%), the presence of moderate/severe aortic regurgitation, and chronic obstructive pulmonary disease remained the only independent predictors of mortality in the multivariate model.
    Conclusions: Midterm to long-term survival after TAVI was encouraging in this high-risk patient population, although a substantial proportion of patients died within the first year.

    Data-driven sentence simplification: Survey and benchmark

    Sentence Simplification (SS) aims to modify a sentence in order to make it easier to read and understand. To do so, several rewriting transformations can be performed, such as replacement, reordering, and splitting. Executing these transformations while keeping sentences grammatical, preserving their main idea, and generating simpler output is a challenging and still far from solved problem. In this article, we survey research on SS, focusing on approaches that attempt to learn how to simplify using corpora of aligned original-simplified sentence pairs in English, which is currently the dominant paradigm. We also include a benchmark of different approaches on common datasets so as to compare them and highlight their strengths and limitations. We expect that this survey will serve as a starting point for researchers interested in the task and help spark new ideas for future developments.
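    Two of the rewriting transformations the survey discusses, lexical replacement and sentence splitting, can be illustrated with a toy rule-based sketch. The substitution table and the splitting rule below are invented for illustration and are far cruder than the learned models the survey covers:

```python
# Hypothetical substitution table: difficult word -> simpler synonym.
SUBSTITUTIONS = {"commence": "begin", "purchase": "buy", "utilize": "use"}

def replace_words(sentence: str) -> str:
    """Lexical replacement: swap difficult words for easier synonyms."""
    return " ".join(SUBSTITUTIONS.get(w, w) for w in sentence.split())

def split_sentence(sentence: str) -> list[str]:
    """Naive splitting: break a compound sentence at ', and '."""
    parts = sentence.rstrip(".").split(", and ")
    return [p.strip().capitalize() + "." for p in parts]

sent = "We commence the project, and we purchase new equipment."
print(split_sentence(replace_words(sent)))
```

    Real systems learn such rewrites from aligned original-simplified corpora rather than from a hand-written table, and must also guarantee grammaticality and meaning preservation, which this sketch does not.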

    Integer Linear Programming for Natural Language Processing (Integer lineair programmeren voor natuurlijke taalverwerking)

    No full text
    The field of natural language processing has made great advances in the last decades. Most of us have used Google Translate, and have heard of Watson, the computer that won Jeopardy! Luckily, there are still many open problems in natural language processing, specifically with the machine learning algorithms behind it. A first problem is the training data. Most algorithms are supervised, and require annotated training data to learn from. This data has to be manually annotated, which is often an expensive and time-consuming task. Therefore, there is significant interest in weakly supervised and unsupervised methods, which learn with less or no annotated data. A second problem is the computational complexity of the algorithms. Often approximate algorithms have to be used, leading to sub-optimal results. Especially methods that use little supervision tend to be very complex. In this thesis, we approach these two problems from an optimization point of view. By leveraging decades of advances in the field of mathematical optimization, we are able to solve several natural language processing problems in an effective and efficient way. More specifically, we formulate the problems as constrained optimization problems, using integer linear programming. This allows us to find a globally optimal solution, and it allows us to encode additional knowledge through the use of constraints. We apply this strategy to several natural language processing problems, both at the sentence level and at the document level. At the sentence level we focus on sentence compression. We not only develop state-of-the-art methods, but also contribute by annotating data and comparing evaluation measures. At the document level we develop methods for rhetorical role classification, coreference resolution, and document simplification. Although these can all be solved as traditional classification problems, we develop methods that find a globally optimal solution. These methods are computationally more intensive, but we find efficient ways of solving them, by building on decades of research in the field of optimization theory. By efficiently finding a globally optimal solution we are able to outperform the current state of the art in many aspects.

    Distributed Morphology: An oratio pro domo

    No full text
    In this paper we aim to explain and illustrate the theory of Distributed Morphology for non-specialists. The goal is to take away any misunderstandings and to provide some illustrations of the workings of the theory, mainly on the basis of data from Dutch. Distributed Morphology is a theory of morphology that embraces the so-called Separation Hypothesis: derivation (the forming of a new word by some abstract operation) is separated from affixation (the realization or spell-out of the abstract operation by the addition of some phonologically specified element). DM implements the Separation Hypothesis through late (after syntax) insertion of affixes. Furthermore, Distributed Morphology claims that there is no separate component of the grammar where word-formation takes place. The operations that form new words are the same operations that may create syntactic phrases. Starting from these fundamental claims, we go into some detail of the way Distributed Morphology accounts for different morphological patterns. The paper also points at some cognate, but alternative, approaches to word-formation and inflection. In particular, we briefly address Borer's so-called exo-skeletal model, and the nanosyntactic approach.

    Text simplification for children

    No full text
    The goal in this paper is to automatically transform text into a simpler text, so that it is easier for children to understand. We perform syntactic simplification, i.e. the splitting of sentences, and lexical simplification, i.e. replacing difficult words with easier synonyms. We test the performance of this approach for each component separately on a per-sentence basis, and globally with the automatic construction of simplified news articles and encyclopedia articles. By including information from a language model in the lexical simplification step, we obtain better results than a baseline method. The syntactic simplification experiments show that some phenomena are hard for a parser to recognize, and that errors are often introduced. Although the reading difficulty goes down, it still does not reach the level required for young children.
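    Using a language model to pick among candidate synonyms can be sketched with a tiny bigram count model. Everything here is invented for illustration: the corpus, the candidate list, and the scoring are stand-ins for the paper's actual language model and lexical resources:

```python
from collections import Counter

# Toy "language model": bigram counts from a three-sentence corpus.
corpus = "the big dog ran . the large dog ran . the big dog ran ."
tokens = corpus.split()
bigrams = Counter(zip(tokens, tokens[1:]))

def lm_score(words):
    """Sum of bigram counts: higher means the sequence is more 'fluent'."""
    return sum(bigrams[b] for b in zip(words, words[1:]))

def simplify(words, position, candidates):
    """Pick the synonym the bigram model scores highest in context."""
    def with_cand(c):
        return words[:position] + [c] + words[position + 1:]
    return max(candidates, key=lambda c: lm_score(with_cand(c)))

print(simplify(["the", "enormous", "dog", "ran"], 1, ["big", "large"]))
```

    The context decides the choice: "big" wins here only because "the big dog" is more frequent in the toy corpus than "the large dog".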

    Sentence compression for Dutch using integer linear programming

    No full text
    Sentence compression is a valuable task in the framework of text summarization. In this paper we compress sentences from news articles taken from Dutch and Flemish newspapers using an integer linear programming approach. We rely on the Alpino parser available for Dutch and on the Latent Words Language Model. We demonstrate that the integer linear programming approach yields good results for compressing Dutch sentences, despite the large freedom in word order.
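    The core idea of compression as constrained 0/1 optimization can be sketched without a solver: each word gets a binary keep/drop variable, the objective sums importance scores of kept words, and a length constraint bounds the output. For a handful of words we can simply enumerate all assignments; the paper instead uses integer linear programming, and the words and importance scores below are invented:

```python
from itertools import product

words = ["the", "very", "old", "house", "finally", "collapsed"]
score = [0.2, 0.05, 0.4, 0.9, 0.1, 0.95]  # hypothetical importance scores
MAX_LEN = 3  # compression budget: keep at most 3 words

# Enumerate all 2^n keep/drop assignments satisfying the length
# constraint, and take the one maximizing total kept importance.
best = max(
    (keep for keep in product([0, 1], repeat=len(words))
     if sum(keep) <= MAX_LEN),
    key=lambda keep: sum(s * k for s, k in zip(score, keep)),
)
print([w for w, k in zip(words, best) if k])
```

    An ILP solver finds the same optimum without exhaustive search, and real formulations add grammaticality constraints (e.g. from a dependency parse) on top of the length budget.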

    Integer linear programming for Dutch sentence compression

    No full text
    Sentence compression is a valuable task in the framework of text summarization. In this paper we compress sentences from Dutch-language news articles from Dutch and Flemish newspapers using an integer linear programming approach. We rely on the Alpino parser available for Dutch and on the Latent Words Language Model. We demonstrate that the integer linear programming approach yields good results for compressing Dutch sentences, despite the large freedom in word order.

    A dataset for the evaluation of lexical simplification

    No full text
    Lexical Simplification is the task of replacing individual words of a text with words that are easier to understand, so that the text as a whole becomes easier to comprehend, e.g. by people with learning disabilities or by children who are learning to read. Although this seems like a straightforward task, evaluating algorithms for it is not. The problem is how to build a dataset that provides an exhaustive list of easier-to-understand words in different contexts, and how to obtain an absolute ordering on this list of synonymous expressions. In this paper we reuse existing resources for a similar problem, that of Lexical Substitution, and transform this dataset into a dataset for Lexical Simplification. This new dataset contains 430 sentences, each with one word marked. For that word, a list of words that can replace it, sorted by their difficulty, is provided. The paper reports on how this dataset was created based on the annotations of different persons, and on their agreement. In addition we provide several metrics for computing the similarity between ranked lexical substitutions, which are used to assess the value of the different annotations, but which can also be used to compare the lexical simplifications suggested by an algorithm with the ground truth model.
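    One metric of the kind described here, for comparing two difficulty rankings of the same candidate substitutions, is Kendall's tau over word pairs. This is a generic sketch, not necessarily one of the paper's metrics, and the word lists are invented; ties in rank are not handled:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Tau = (concordant - discordant) / total pairs, in [-1, 1]."""
    pos_a = {w: i for i, w in enumerate(rank_a)}
    pos_b = {w: i for i, w in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # Positive product: the pair is ordered the same way in both.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

gold = ["easy", "simple", "plain", "facile"]    # annotator ranking
system = ["simple", "easy", "plain", "facile"]  # one adjacent swap
print(kendall_tau(gold, system))
```

    Identical rankings score 1.0, a full reversal scores -1.0, and the single swapped pair above costs 2/6 of the score, giving 4/6.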

    Lexical simplification

    No full text
    In this paper we present a generic approach to lexical simplification that is easily applicable to many languages. Lexical simplification helps children, illiterate, foreign, and disabled people to read texts, by replacing difficult words with words that are easier to understand. Although syntactic simplification has received a lot of attention, work on lexical simplification has been limited. The methods are based on integrating dictionary information with a latent words language model.